65 research outputs found

    A deep evaluator for image retargeting quality by geometrical and contextual interaction

    An image is compressed or stretched when it is displayed across devices with different screen sizes and aspect ratios, which can severely degrade perceived quality. A variety of image retargeting methods have been proposed to address this problem, but how to evaluate the results of different retargeting algorithms remains a critical issue. Subjective evaluation cannot be applied at scale in practical systems, so we focus on accurate objective quality evaluation. Currently, most image retargeting quality assessment algorithms use simple regression as the final step to obtain the evaluation score, which does not correspond to how perception is simulated in the human visual system (HVS). In this paper, a deep quality evaluator for image retargeting based on the segmented stacked autoencoder (SAE) is proposed. With the help of regularization, the designed deep learning framework alleviates overfitting. The main contribution of the framework is to simulate the perception of retargeted images in the HVS: it trains two separate SAE models based on geometrical shape and content matching, and a weighting scheme is then used to combine the scores obtained from the two models. Experimental results on three well-known databases show that our method outperforms traditional methods in evaluating different image retargeting results.
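
    A minimal sketch of the two-branch idea described above, assuming pre-extracted geometrical-shape and content-matching feature vectors; the layer sizes, feature dimensions, and combination weight are illustrative assumptions, not the authors' implementation:

        # Two stacked encoders, one per feature type, each followed by a small
        # regressor; the two branch scores are combined with a weight w.
        import torch
        import torch.nn as nn

        class StackedAE(nn.Module):
            def __init__(self, in_dim, hidden=(128, 64)):
                super().__init__()
                dims = (in_dim,) + hidden
                layers = []
                for d_in, d_out in zip(dims[:-1], dims[1:]):
                    layers += [nn.Linear(d_in, d_out), nn.ReLU()]
                self.encoder = nn.Sequential(*layers)
                self.regressor = nn.Linear(hidden[-1], 1)   # code -> quality score

            def forward(self, x):
                return self.regressor(self.encoder(x))

        geo_model = StackedAE(in_dim=256)       # geometrical-shape features (assumed dim)
        content_model = StackedAE(in_dim=512)   # content-matching features (assumed dim)

        def predict_quality(geo_feat, content_feat, w=0.5):
            # Weighted combination of the two branch scores.
            return w * geo_model(geo_feat) + (1 - w) * content_model(content_feat)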

    No reference quality assessment of stereo video based on saliency and sparsity

    With the growing popularity of video technology, stereoscopic video quality assessment (SVQA) has become increasingly important. Existing SVQA methods do not achieve good performance because the information in the videos is not fully utilized. In this paper, we consider multiple sources of information in the videos together and construct a simple model, based on saliency and sparsity, to combine and analyze the diverse features. First, we use the 3-D saliency map of the sum map, which retains the basic information of the stereoscopic video, as a tool for evaluating video quality. Second, we apply sparse representation to decompose the 3-D saliency sum map into coefficients and compute features from those sparse coefficients to obtain an effective description of the video content. Next, to reduce the correlation between the features, we feed them into a stacked autoencoder that maps the vectors to a higher-dimensional space under a sparsity constraint, and then pass the result to a support vector machine to obtain the final quality scores. Throughout this process, saliency and sparsity are exploited to extract and simplify features. Experiments show that the proposed method agrees well with subjective scores.
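
    A rough sketch of the sparsity-and-regression stage, assuming a precomputed 3-D saliency sum map; the patch size, dictionary size, pooling scheme, and the use of support vector regression in place of the generic "support vector machine" step are illustrative choices rather than the paper's exact pipeline:

        # Sparse-code patches of a saliency sum map over a learned dictionary,
        # pool the coefficients into a feature vector, and regress a quality score.
        import numpy as np
        from sklearn.decomposition import DictionaryLearning, sparse_encode
        from sklearn.svm import SVR

        def patchify(saliency_map, p=8):
            # Non-overlapping p x p patches flattened into vectors.
            h, w = saliency_map.shape
            return np.array([saliency_map[i:i + p, j:j + p].ravel()
                             for i in range(0, h - p + 1, p)
                             for j in range(0, w - p + 1, p)])

        train_patches = np.random.rand(500, 64)          # stand-in for real patches
        dico = DictionaryLearning(n_components=128, transform_algorithm='omp',
                                  transform_n_nonzero_coefs=5, max_iter=10)
        dico.fit(train_patches)

        def video_features(saliency_map):
            codes = sparse_encode(patchify(saliency_map), dico.components_,
                                  algorithm='omp', n_nonzero_coefs=5)
            # Simple pooling of the sparse coefficients into a fixed-length descriptor.
            return np.concatenate([np.abs(codes).mean(axis=0), np.abs(codes).max(axis=0)])

        X = np.stack([video_features(np.random.rand(64, 64)) for _ in range(20)])
        y = np.random.rand(20)                           # stand-in subjective scores
        svr = SVR(kernel='rbf').fit(X, y)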

    Sparse representation based stereoscopic image quality assessment accounting for perceptual cognitive process

    In this paper, we propose a sparse representation based Reduced-Reference Image Quality Assessment (RR-IQA) index for stereoscopic images from the following two perspectives: 1) the human visual system (HVS) always tries to infer meaningful information and reduce uncertainty from visual stimuli, and the entropy of primitives (EoP) can well describe this visual cognitive process when perceiving natural images; 2) ocular dominance (also known as binocularity), which represents the interaction between the two eyes, is quantified by the sparse representation coefficients. Inspired by previous research, the perception and understanding of an image is considered an active inference process driven by the level of “surprise”, which can be described by the EoP. Therefore, primitives learnt from natural images can be used to evaluate visual information by computing entropy. Meanwhile, to account for binocularity in stereoscopic image quality assessment, a feasible way is proposed to characterize the binocular process from the sparse representation coefficients of each view. Experimental results on the LIVE 3D image databases and the MCL database further demonstrate that the proposed algorithm achieves high consistency with subjective evaluation.
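
    An illustrative sketch of an entropy-of-primitives style measure, assuming a dictionary of primitives learnt offline from natural image patches; the encoding, normalization, and entropy formula below are assumptions, not the paper's exact definition:

        # Encode image patches over a dictionary of primitives, histogram which
        # primitives are activated, and compute the entropy of that distribution.
        import numpy as np
        from sklearn.decomposition import sparse_encode

        def entropy_of_primitives(patches, dictionary, n_nonzero=3):
            # patches: (n_patches, patch_dim); dictionary: (n_atoms, patch_dim)
            codes = sparse_encode(patches, dictionary, algorithm='omp',
                                  n_nonzero_coefs=n_nonzero)
            activations = np.abs(codes).sum(axis=0)          # usage of each primitive
            p = activations / (activations.sum() + 1e-12)    # normalize to a distribution
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())            # Shannon entropy in bits

        # The left- and right-view entropies could then be combined with weights
        # derived from the sparse coefficients to reflect ocular dominance.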

    Stereoscopic video quality assessment based on 3D convolutional neural networks

    Research on stereoscopic video quality assessment (SVQA) plays an important role in promoting the development of stereoscopic video systems. Existing SVQA metrics rely on hand-crafted features, which are inaccurate and time-consuming given the diversity and complexity of stereoscopic video distortions. This paper introduces a 3D convolutional neural network (CNN) based SVQA framework that can model not only local spatio-temporal information but also global temporal information, using cubic difference video patches as input. First, instead of using hand-crafted features, we design a 3D CNN architecture to automatically and effectively capture local spatio-temporal features. Then we employ a quality score fusion strategy that considers global temporal cues to obtain the final video-level predicted score. Extensive experiments on two public stereoscopic video quality datasets show that the proposed method correlates highly with human perception and outperforms state-of-the-art methods by a large margin. We also show that our 3D CNN features have more desirable properties for SVQA than the hand-crafted features used in previous methods, and that combining our 3D CNN features with support vector regression (SVR) can further boost performance. In addition, without complex preprocessing or GPU acceleration, the proposed method is shown to be computationally efficient and easy to use.
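
    A minimal sketch of a 3D-CNN quality predictor operating on cubic video patches (channels x frames x height x width), with a simple mean fusion of patch scores into a video-level score; the layer sizes, patch shape, and fusion rule are illustrative assumptions, not the paper's architecture:

        import torch
        import torch.nn as nn

        class PatchQuality3DCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),          # global spatio-temporal pooling
                )
                self.score = nn.Linear(32, 1)         # patch-level quality score

            def forward(self, x):                     # x: (batch, 1, T, H, W)
                return self.score(self.features(x).flatten(1))

        model = PatchQuality3DCNN()
        patch_scores = model(torch.randn(8, 1, 16, 32, 32))   # 8 difference patches
        video_score = patch_scores.mean()             # simple fusion to a video score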

    Blind assessment for stereo images considering binocular characteristics and deep perception map based on deep belief network

    In recent years, blind image quality assessment for 2D images and video has gained popularity, but its application to 3D images and video has yet to be generalized. In this paper, we propose an effective blind metric that evaluates stereo images via a deep belief network (DBN). The method is based on the wavelet transform and uses both 2D features from the monocular images, as image content descriptors, and 3D features from a novel depth perception map (DPM), as depth perception descriptors. In particular, the DPM is introduced to quantify longitudinal depth information so as to align with human stereo visual perception. More specifically, the 2D features are local histogram of oriented gradient (HoG) features from the high-frequency wavelet coefficients together with global statistical features including magnitude, variance and entropy, while the global statistical features of the DPM serve as the 3D features. Subsequently, considering binocular characteristics, an effective binocular weight model based on multiscale energy estimation of the left and right images is adopted to obtain the content quality. In the training and testing stages, three DBN models, one for each of the three feature types, are used to obtain the final score. Experimental results demonstrate that the proposed stereo image quality evaluation model clearly outperforms existing methods and achieves higher consistency with subjective quality assessments.
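
    A sketch of the binocular weighting idea mentioned above: estimate a multiscale "energy" for the left and right views (here via Gaussian-pyramid gradient magnitudes) and weight each view's content quality by its relative energy. This is an assumption-laden illustration of the weighting step, not the paper's exact model:

        import numpy as np
        from scipy import ndimage

        def multiscale_energy(img, n_scales=3):
            # Accumulate mean gradient magnitude over a small Gaussian pyramid.
            energy, current = 0.0, img.astype(float)
            for _ in range(n_scales):
                gx = ndimage.sobel(current, axis=1)
                gy = ndimage.sobel(current, axis=0)
                energy += np.mean(np.hypot(gx, gy))
                current = ndimage.zoom(ndimage.gaussian_filter(current, 1.0), 0.5)
            return energy

        def binocular_quality(q_left, q_right, img_left, img_right):
            # Weight per-view quality scores by relative multiscale energy.
            e_l, e_r = multiscale_energy(img_left), multiscale_energy(img_right)
            w_l = e_l / (e_l + e_r + 1e-12)
            return w_l * q_left + (1.0 - w_l) * q_right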

    No-reference stereoscopic image-quality metric accounting for left and right similarity map and spatial structure degradation

    Blind quality assessment of 3D images faces more practical challenges than that of 2D images. In this Letter, we develop a no-reference stereoscopic image quality assessment (SIQA) model based on a proposed left-and-right (LR) similarity map and structural degradation. In the proposed method, local binary pattern (LBP) features, which are effective for describing the distortion of 3D images, are extracted from the cyclopean image. More importantly, we are the first to propose the LR-similarity map, which can indicate stereopair quality, and we demonstrate that using LR-similarity information yields a consistent improvement in performance. Extensive experimental results on the LIVE 3D and IRCCyN IQA databases demonstrate that the designed model is strongly correlated with subjective quality evaluations and competitive with state-of-the-art SIQA algorithms.
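
    A brief sketch of extracting a rotation-invariant LBP histogram from a (precomputed) cyclopean image with scikit-image; the radius, number of sampling points, and binning are illustrative choices, not the paper's settings:

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(cyclopean, P=8, R=1.0):
            # Uniform LBP codes take values 0 .. P+1, hence P+2 histogram bins.
            codes = local_binary_pattern(cyclopean, P, R, method='uniform')
            n_bins = P + 2
            hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
            return hist

        # An LR-similarity map could be built, for example, as a per-pixel similarity
        # between the left and right views and pooled into additional features.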

    Precise measurement of position and attitude based on convolutional neural network and visual correspondence relationship

    Accurate measurement of position and attitude information is particularly important. Traditional measurement methods generally require high-precision measurement equipment for analysis, which leads to high cost and limited applicability, while vision-based measurement schemes need to solve complex visual correspondence relationships. With the extensive development of neural networks in related fields, it has become possible to apply them to object position and attitude measurement. In this paper, we propose an object pose measurement scheme based on a convolutional neural network and successfully implement end-to-end position and attitude detection. Furthermore, to effectively expand the measurement range and reduce the number of training samples, we demonstrate the independence of the pose parameters in each dimension and propose sub-added training schemes. At the same time, we construct a generative image encoder to guarantee the detection performance of the trained model in practical applications.
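
    A minimal sketch of an end-to-end pose regressor in the spirit described above: a small CNN mapping an input image to a 6-D vector (3 translation plus 3 rotation parameters). The layer sizes and pose parameterization are assumptions for illustration, not the paper's network:

        import torch
        import torch.nn as nn

        class PoseNetLite(nn.Module):
            def __init__(self):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 6)   # x, y, z, roll, pitch, yaw

            def forward(self, img):            # img: (batch, 3, H, W)
                return self.head(self.backbone(img).flatten(1))

        pose = PoseNetLite()(torch.randn(1, 3, 128, 128))   # one predicted 6-D pose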

    Quality index for stereoscopic images by jointly evaluating cyclopean amplitude and cyclopean phase

    With the widespread application of three-dimensional (3-D) technology, measuring the quality of experience of 3-D multimedia content plays an increasingly important role. In this paper, we propose a full-reference stereoscopic image quality assessment (SIQA) framework that focuses on binocular visual properties and the use of low-level features. On the one hand, based on the fact that the human visual system understands an image mainly through its low-level features, the local phase and local amplitude extracted from a phase congruency measurement are employed as primary features. Considering the less prominent contribution of amplitude in IQA, visual saliency is applied to modify the amplitude. On the other hand, by fully considering binocular rivalry phenomena, we construct a cyclopean amplitude map and a cyclopean phase map, so that image features and binocular visual properties are combined with each other. Meanwhile, a novel binocular modulation function in the spatial domain is adopted in the overall quality prediction from amplitude and phase. Extensive experiments demonstrate that the proposed framework achieves higher consistency with subjective tests than related SIQA metrics.
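
    A sketch of the cyclopean-combination idea: compute a local complex response for each view with a single Gabor filter (a simple stand-in for the phase congruency features), then fuse the two views per pixel with normalized local-energy weights as a stand-in for the binocular rivalry model. The filter parameters and weighting rule are assumptions:

        import numpy as np
        from scipy.signal import fftconvolve

        def gabor_response(img, freq=0.15, sigma=4.0, size=21):
            # Complex Gabor kernel: Gaussian envelope times a complex carrier.
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            kernel = (np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
                      * np.exp(2j * np.pi * freq * xx))
            return fftconvolve(img.astype(float), kernel, mode='same')

        def cyclopean_maps(left, right):
            resp_l, resp_r = gabor_response(left), gabor_response(right)
            e_l, e_r = np.abs(resp_l)**2, np.abs(resp_r)**2
            w_l = e_l / (e_l + e_r + 1e-12)             # energy-based rivalry weight
            fused = w_l * resp_l + (1 - w_l) * resp_r
            return np.abs(fused), np.angle(fused)       # cyclopean amplitude, phase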

    A blind stereoscopic image quality evaluator with segmented stacked autoencoders considering the whole visual perception route

    Most current blind stereoscopic image quality assessment (SIQA) algorithms cannot deliver reliable accuracy. One reason is that they lack deep architectures, and another is that they are designed on a relatively weak biological basis compared with current findings on the human visual system (HVS). In this paper, we propose a Deep Edge and COlor Signal INtegrity Evaluator (DECOSINE) based on the whole visual perception route from the eyes to the frontal lobe, focusing in particular on edge and color signal processing in the retinal ganglion cells (RGC) and the lateral geniculate nucleus (LGN). Furthermore, to model the complex and deep structure of the visual cortex, a Segmented Stacked Auto-encoder (S-SAE) is used, which had not been utilized for SIQA before. The use of the S-SAE addresses a weakness of deep learning-based SIQA metrics, which require very long training times. Experiments are conducted on popular SIQA databases, and the superiority of DECOSINE in terms of prediction accuracy and monotonicity is demonstrated. The experimental results show that both our model of the whole visual perception route and the use of the S-SAE are effective for SIQA.
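
    A sketch of a retina/LGN-inspired front end of the kind described above: a difference-of-Gaussians (center-surround) response as the edge signal and simple red-green / blue-yellow opponent channels as the color signal. The filter scales and opponency formulas are illustrative assumptions, not the DECOSINE definitions:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def edge_signal(gray, sigma_c=1.0, sigma_s=3.0):
            # Center-surround (DoG) response, loosely mimicking retinal ganglion cells.
            return gaussian_filter(gray, sigma_c) - gaussian_filter(gray, sigma_s)

        def color_opponent_signals(rgb):
            # rgb: float array of shape (H, W, 3).
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            rg = r - g                       # red-green opponent channel
            by = b - 0.5 * (r + g)           # blue-yellow opponent channel
            return rg, by

        # Features pooled from these maps (for both views) would then be fed to a
        # segmented stacked autoencoder for quality regression.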

    Quality assessment metric of stereo images considering cyclopean integration and visual saliency

    In recent years, there has been great progress in the wider use of three-dimensional (3D) technologies. With increasing sources of 3D content, a useful tool is needed to evaluate the perceived quality of 3D videos and images. This paper puts forward a framework to evaluate the quality of stereoscopic images contaminated by possible symmetric or asymmetric distortions. Studies of the human visual system (HVS) reveal that binocular combination models and visual saliency are two key factors for a stereoscopic image quality assessment (SIQA) metric. Inspired by these findings, this paper proposes a novel saliency map for the cyclopean image, called “cyclopean saliency”, which avoids complex calculations and performs well in detecting salient regions. Experimental results show that our metric significantly outperforms conventional 2D quality metrics and yields higher correlations with human subjective judgment than state-of-the-art SIQA metrics; the performance of 3D saliency is also compared with that of “cyclopean saliency” in SIQA. The proposed metric is applicable to both symmetric and asymmetric distortions, and it can thus provide an effective tool for assessing stereoscopic image quality.
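
    A small sketch of saliency-weighted pooling in the spirit of the framework above: given a per-pixel quality map of the cyclopean image and its cyclopean saliency map (both assumed precomputed), the final score weights each pixel by its saliency. The pooling formula is an illustrative choice, not the paper's exact metric:

        import numpy as np

        def saliency_weighted_score(quality_map, saliency_map):
            # Normalize saliency into weights and pool the quality map.
            w = saliency_map / (saliency_map.sum() + 1e-12)
            return float((w * quality_map).sum())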